Deploying Large Language Models (LLMs) on Google Cloud Platform
Large language models (LLMs), like ChatGPT, are rapidly gaining popularity due to their conversational abilities and natural language understanding. [ more ]
Enhancing Developer Experience for Creating Artificial Intelligence Applications
Large language models (LLMs) such as ChatGPT have transformed how AI applications are built, significantly reducing complexity and cost for developers without deep AI expertise. [ more ]
Open Data Science - Your News Source for AI, Machine Learning & more
What to expect of Microsoft's "cheaper generative AI" efforts - TechTalks
Microsoft forms a new team to develop cheaper generative AI systems to reduce its dependence on OpenAI's expensive large language models (LLMs).
The market for LLMs is becoming commoditized, and there is growing interest in open LLMs and small language models (SLMs) that run on phones and personal computers. [ more ]
How AI companies are trying to solve the LLM hallucination problem
Large language models like ChatGPT have shown a tendency to generate false or made-up information, which presents legal and reputational risks for companies.
Companies are now racing to develop solutions to minimize the damage caused by hallucinations in language models. [ more ]
"Hallucinating" AI models help coin Cambridge Dictionary's word of the year
Cambridge Dictionary has chosen 'hallucinate' as its 2023 word of the year due to the popularity of large language models (LLMs) like ChatGPT that sometimes produce false information.
The definition of 'hallucinate' in relation to AI originated as a term in the machine learning space, but has spilled over into general use, causing confusion and unnecessary anthropomorphism.
The term 'hallucinate' highlights the need to understand both the strengths and weaknesses of generative AI in order to interact with it safely and effectively. [ more ]
AI in space: Karpathy suggests AI chatbots as interstellar messengers to alien civilizations
Large language models (LLMs) could potentially be modified to operate in space for communication with extraterrestrial life, a lighthearted proposal by AI researcher Andrej Karpathy. [ more ]
Navigating LLM Deployment: Tips, Tricks and Techniques by Meryem Arik at Qcon London
Initial proofs of concept benefit from hosted solutions, but self-hosting becomes necessary at scale to cut costs, improve performance, and meet security requirements.
Using quantization and optimizing inference can help maximize GPU resources and efficiency in deploying Large Language Models. [ more ]
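The quantization mentioned above can be sketched in a few lines. This is an illustrative toy, not a production recipe: real LLM deployments use libraries such as bitsandbytes or GPTQ, but the core arithmetic of symmetric int8 weight quantization looks like this.

```python
# Symmetric int8 quantization sketch: map float weights to 8-bit integers
# plus one per-tensor scale, cutting memory roughly 4x versus float32.

def quantize_int8(weights):
    """Return int8-range values and the scale used to recover floats."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Approximate the original floats from the quantized values."""
    return [v * scale for v in q]

weights = [0.42, -1.27, 0.003, 0.9]
q, scale = quantize_int8(weights)
recovered = dequantize(q, scale)
max_err = max(abs(a - b) for a, b in zip(weights, recovered))
```

The rounding error per weight is bounded by half the scale, which is why quantization preserves model quality well when the weight distribution is not dominated by outliers.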
AI chatbots are improving at an even faster rate than computer chips
AI chatbots are improving rapidly: the computing power needed to reach a given level of capability halves roughly every eight months, outpacing the rate of improvement in computer chips. [ more ]
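A quick back-of-the-envelope comparison makes the gap concrete. The figures below are illustrative, assuming the eight-month algorithmic halving above against a Moore's-law-style chip doubling of roughly 24 months.

```python
# Compare algorithmic efficiency gains with hardware gains over 4 years,
# assuming compute-per-capability halves every 8 months (algorithms) and
# compute-per-chip doubles every 24 months (hardware). Illustrative numbers.

months = 48
algorithmic_gain = 2 ** (months / 8)   # compute needed shrinks 64x
hardware_gain = 2 ** (months / 24)     # chip throughput grows 4x
```

Under these assumptions, algorithmic progress delivers a 64x reduction in required compute over four years, while chips alone deliver only 4x, which is the sense in which chatbots are "improving faster than chips."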
AI-generated legal outputs often contain errors and falsehoods, leading to real-world consequences.
Hallucination, where AI models produce responses that don't align with reality, poses a significant challenge in the use of large language models. [ more ]
The AI experts who believe the AI boom could fizzle out
Generative AI may not meet expectations in boosting the world economy as initially predicted by Goldman Sachs.
Large Language Models like GPT-4 might be reaching a plateau in capability, with no significantly more powerful model launched since. [ more ]
Ema, a 'Universal AI employee', emerges from stealth with $25M | TechCrunch
Generative AI startup Ema aims to revolutionize work processes with AI solutions like GWE and EmaFusion.
Ema differentiates itself from traditional AI tools by combining Large Language Models and domain-specific models for accuracy and data protection. [ more ]
The Honor Magic6 Pro focuses on AI - Here are things we like - Yanko Design
HONOR launched HONOR Magic6 Pro smartphone and HONOR MagicBook Pro 16 AI PC at MWC 2024 showcasing advancements in AI technology.
HONOR Magic6 Pro features Magic Capsule and Magic Portal for enhanced user experience, and collaborations integrate large language models for offline tasks. [ more ]
AI safeguards can easily be broken, UK Safety Institute finds
The UK's AI Safety Institute found that advanced AI systems can deceive users, produce biased outcomes, and have inadequate safeguards.
Basic prompts could bypass safeguards for large language models (LLMs), and more sophisticated techniques took only a couple of hours to develop, putting them within reach of low-skilled actors.
LLMs could be used to plan cyber-attacks, produce convincing social media personas, and generate racially biased outcomes. [ more ]
How to build better AI products with user research
Advancements in computing power and availability of extensive datasets have driven breakthroughs in generative AI and Large Language Models (LLMs).
Embedding AI into every product feature without understanding user needs can lead to disappointing results and potential risks to user privacy and safety. [ more ]
Pentagon's new bug bounty seeks to find bias in AI systems
The Pentagon's Chief Digital and Artificial Intelligence Office (CDAO) has launched a bug bounty exercise to detect biases in AI systems, with a focus on large language models.
The exercise will run from Jan. 29 through Feb. 27 and will identify unknown areas of risk in large language models, starting with open-source chatbots.
The bug bounty exercise is being overseen by ConductorAI and Bugcrowd and offers a $24,000 pot for identifying instances of protected class bias. [ more ]
AI models frequently 'hallucinate' on legal queries, study finds
Generative AI models frequently produce false legal information, with hallucinations occurring between 69% and 88% of the time.
The pervasive nature of these legal hallucinations raises significant concerns about the reliability of using large language models (LLMs) in the field. [ more ]
Unveiling the Dark Side of AI: How Prompt Hacking Can Sabotage Your AI Systems
Prompt hacking is a cybersecurity threat that involves manipulating the initial prompt given to a language model to potentially access sensitive information.
Prompt hacking poses a significant threat to data privacy and security in production systems that house sensitive data.
Understanding and mitigating the risks associated with prompt hacking is critical for businesses leveraging large language models. [ more ]
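One common first line of defense against the prompt hacking described above is screening user input before it reaches the model. The sketch below is intentionally simple and easy to evade; real systems layer input filtering with output checks and privilege separation. The pattern list is a hypothetical example, not a vetted ruleset.

```python
import re

# Naive pre-filter for prompt-injection attempts: flag inputs that match
# known manipulation phrasings before they are concatenated into a prompt.
INJECTION_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"reveal (your|the) system prompt",
    r"you are now",
]

def looks_like_injection(user_input: str) -> bool:
    """Return True if the input matches any known injection pattern."""
    text = user_input.lower()
    return any(re.search(p, text) for p in INJECTION_PATTERNS)
```

Because attackers can rephrase freely, pattern matching alone is insufficient; it mainly raises the cost of the most casual attacks while logging attempts for review.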
Lasso aims to address cybersecurity challenges posed by Large Language Models (LLMs) through comprehensive protection and innovative security solutions.
Lasso plans to use the funding to expand its team and improve its offerings. [ more ]